AIbase

# DPO Fine-tuning Technique

**Mathhermes 2.5 Mistral 7B** (Apache-2.0, by simonveitner)
OpenHermes 2.5 is a large language model based on the Mistral-7B architecture, further optimized for mathematical reasoning with DPO (Direct Preference Optimization) and supporting multi-turn dialogue in the ChatML format.
Tags: Large Language Model, Transformers, English
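The listing above centers on DPO fine-tuning. As background, DPO trains a policy model directly on preference pairs (a chosen and a rejected response) against a frozen reference model, with no separate reward model. A minimal sketch of the DPO objective follows; the function name and tensor arguments are illustrative assumptions, not part of this listing or any specific library API:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss (illustrative sketch).

    Each argument is the summed log-probability of a full response
    under either the trainable policy or the frozen reference model.
    beta controls how far the policy may drift from the reference.
    """
    # Log-ratio of chosen vs. rejected under each model
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # DPO logit: implicit reward margin, scaled by beta
    logits = beta * (pi_logratios - ref_logratios)
    # Negative log-sigmoid pushes the policy to prefer the chosen response
    return -F.logsigmoid(logits).mean()
```

As the policy's preference margin for the chosen response grows relative to the reference, the loss decreases toward zero.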
© 2025 AIbase